Broader impact requirement


Institutionalising Ethics in AI through Broader Impact Requirements

Prunkl, Carina, Ashurst, Carolyn, Anderljung, Markus, Webb, Helena, Leike, Jan, Dafoe, Allan

arXiv.org Artificial Intelligence

Turning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this article, we reflect on a novel governance initiative by one of the world's largest AI conferences. In 2020, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research. Drawing insights from similar governance initiatives, including institutional review boards (IRBs) and impact requirements for funding applications, we investigate the risks, challenges and potential benefits of such an initiative. Among the challenges, we list a lack of recognised best practice and procedural transparency, researcher opportunity costs, institutional and social pressures, cognitive biases, and the inherently difficult nature of the task. The potential benefits, on the other hand, include improved anticipation and identification of impacts, better communication with policy and governance experts, and a general strengthening of the norms around responsible research. To maximise the chance of success, we recommend measures to increase transparency, improve guidance, create incentives to engage earnestly with the process, and facilitate public deliberation on the requirement's merits and future. Perhaps the most important contribution of this analysis is the insight it offers into effective community-based governance and the role and responsibility of the AI research community more broadly.


Exploring the impact of broader impact requirements for AI governance

#artificialintelligence

As machine learning algorithms and other artificial intelligence (AI) tools become increasingly widespread, some governments and institutions have started introducing regulations aimed at ensuring that they are ethically designed and implemented. Last year, for instance, the Neural Information Processing Systems (NeurIPS) conference introduced a new ethics-related requirement for all authors submitting AI-related research. Researchers at the University of Oxford's Institute for Ethics in AI, the Department of Computer Science and the Future of Humanity Institute have recently published a perspective paper that discusses the possible impact and implications of requirements such as the one introduced by the NeurIPS conference. This paper, published in Nature Machine Intelligence, also recommends a series of measures that may maximize these requirements' chance of success. "Last year, NeurIPS introduced a requirement that submitting authors include a broader impact statement in their papers," Carina E. Prunkl, one of the researchers who carried out the study, told TechXplore.